Review for NeurIPS paper: Black-Box Optimization with Local Generative Surrogates
Additional Feedback: Introduction: The focus of this work seems to be on cases where both the inputs and the simulator are stochastic. The approach would be equally applicable to scenarios that are deterministic but otherwise non-differentiable, right? One related work that comes to mind is Sobolev training; it's a bit different in motivation and setup, but it might be nice to cite. The introduction is generally well motivated and concise.
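For concreteness, here is a minimal sketch of a Sobolev-style training loss, assuming PyTorch (the network, data, and lambda_grad weight are placeholders of mine, not from either paper; note that it presupposes gradient targets from the simulator, which is exactly where the setup differs from the black-box setting here):

```python
import torch

# Sobolev-style training: supervise the surrogate's outputs AND its input
# gradients. Everything here (net, data, lambda_grad) is a placeholder.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(128, 4, requires_grad=True)   # simulator inputs
y_true = torch.randn(128, 1)                  # simulator outputs
dy_true = torch.randn(128, 4)                 # simulator gradients dy/dx
lambda_grad = 0.1                             # weight on the derivative term

y_pred = net(x)
# Surrogate gradient w.r.t. inputs, kept in the graph so it can be trained.
dy_pred, = torch.autograd.grad(y_pred.sum(), x, create_graph=True)

loss = (torch.nn.functional.mse_loss(y_pred, y_true)
        + lambda_grad * torch.nn.functional.mse_loss(dy_pred, dy_true))
opt.zero_grad()
loss.backward()
opt.step()
```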
Reviews: Superposition of many models into one
I appreciate that the authors re-implemented the CIFAR benchmark I had requested. However, I'm still unconvinced of the significance or the originality of the proposed approach. For me, two fundamental issues remain: 1) The proposed approach is conceptually very similar to the masking proposed in Masse et al. (2018) that I mentioned in my review. The only difference is essentially masking with a {1,-1} vector vs. masking with a {0,1} vector. For sufficiently sparse masks (as used in Masse et al.), the latter approach will also produce largely non-overlapping feature subsets for different tasks, so I don't see this as a huge difference.
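To make the comparison concrete, a small numpy sketch (my own illustration, not code from either paper; the layer size, task count, and sparsity are placeholders) of the two gating schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_tasks, p_keep = 1000, 5, 0.1   # illustrative sizes/sparsity

h = rng.standard_normal(n_units)          # one layer's activations

# Masse et al. (2018)-style gating: sparse {0,1} masks, one per task.
binary_masks = rng.random((n_tasks, n_units)) < p_keep
gated = binary_masks * h                  # each task sees ~10% of the units

# Superposition-style context: dense {+1,-1} sign masks, one per task.
sign_masks = rng.choice([-1.0, 1.0], size=(n_tasks, n_units))
signed = sign_masks * h                   # every unit kept, signs flipped

# With sparse binary masks, two tasks' active unit sets barely overlap:
overlap = np.mean(binary_masks[0] & binary_masks[1])  # ~ p_keep**2 = 0.01
print(f"fraction of units shared by tasks 0 and 1: {overlap:.3f}")
```

With p_keep = 0.1, two tasks share only about 1% of the units on average, which is why I'd argue the sparse binary gating already yields largely task-specific feature subsets.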
Multi-fidelity data fusion for the approximation of scalar functions with low intrinsic dimensionality through active subspaces
Romor, Francesco, Tezzele, Marco, Rozza, Gianluigi
Gaussian processes are employed for non-parametric regression in a Bayesian setting. They generalize linear regression, embedding the inputs in a latent manifold inside an infinite-dimensional reproducing kernel Hilbert space. We can augment the inputs with the observations of low-fidelity models in order to learn a more expressive latent manifold and thus increase the model's accuracy. This can be realized recursively with a chain of Gaussian processes of increasingly higher fidelity. We would like to extend these multi-fidelity model realizations to case studies affected by a high-dimensional input space but with low intrinsic dimensionality. In these cases, physically supported or purely numerical low-order models are still affected by the curse of dimensionality when queried for responses. When the model's gradient information is provided, the presence of an active subspace can be exploited to design low-fidelity response surfaces and thus enable Gaussian process multi-fidelity regression, without the need to perform new simulations. This is particularly useful in the case of data scarcity. In this work we present a multi-fidelity approach involving active subspaces and we test it on two different high-dimensional benchmarks.
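For illustration only (not the authors' implementation), a minimal sketch of the two ingredients, assuming numpy and scikit-learn; the toy function, kernel choice, and sample sizes are placeholders. The active subspace is estimated by an eigendecomposition of C = E[grad f grad f^T] from gradient samples, and the high-fidelity Gaussian process is trained on inputs augmented with the low-fidelity (active-subspace) response surface:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
d, n = 20, 200                         # ambient dimension, sample count

# Toy function with a one-dimensional active subspace (placeholder for a
# real simulator whose gradients are available).
w = rng.standard_normal(d)
f = lambda X: np.sin(X @ w)
grad_f = lambda X: np.cos(X @ w)[:, None] * w

X = rng.standard_normal((n, d))
y = f(X)

# Active subspace: eigendecomposition of C = E[grad f grad f^T],
# estimated from the available gradient samples.
G = grad_f(X)
C = G.T @ G / n
eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues
W1 = eigvecs[:, -1:]                   # leading active direction(s)

# Low-fidelity response surface on the reduced coordinate t = X @ W1;
# built from the data already in hand, no new simulations required.
gp_low = GaussianProcessRegressor(kernel=RBF()).fit(X @ W1, y)

# Multi-fidelity step: in a data-scarce regime the high-fidelity GP sees
# only a few samples, augmented with the low-fidelity prediction.
idx = rng.choice(n, size=30, replace=False)
X_aug = np.hstack([X[idx], gp_low.predict(X[idx] @ W1)[:, None]])
gp_high = GaussianProcessRegressor(kernel=RBF()).fit(X_aug, y[idx])

X_test = rng.standard_normal((50, d))
X_test_aug = np.hstack([X_test, gp_low.predict(X_test @ W1)[:, None]])
rmse = np.sqrt(np.mean((gp_high.predict(X_test_aug) - f(X_test)) ** 2))
print(f"high-fidelity RMSE on held-out points: {rmse:.3f}")
```

The [x, f_low(x)] augmentation mirrors the recursive chain-of-Gaussian-processes construction described above, with the active-subspace surface playing the role of the low-fidelity model.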